Lyapunov Stability Analysis of Gradient Descent-Learning Algorithm in Network Training
Authors
Abstract
Similar Resources
Analysis of power in the network society
Social scientists and scholars believe that a new stage in the history of human societies has begun. The characteristics of this new society include phenomena such as the global informational economy, the variable geometry of networks, the culture of real virtuality, the astonishing growth of digital technologies, online services, and the compression of time and space. Power, for its part, as the central subject of political science, occupies an important place in human relations; power and its reproduction...
Online gradient descent learning algorithm
This paper considers the least-square online gradient descent algorithm in a reproducing kernel Hilbert space (RKHS) without an explicit regularization term. We present a novel capacity independent approach to derive error bounds and convergence results for this algorithm. The essential element in our analysis is the interplay between the generalization error and a weighted cumulative error whi...
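To make the update rule concrete, here is a minimal Python sketch of unregularized least-squares online gradient descent in an RKHS, in the spirit of the algorithm this abstract describes; the Gaussian kernel, the decaying step size eta / sqrt(t), and the toy data are illustrative assumptions rather than details taken from the paper.

```python
import numpy as np

def gaussian_kernel(x, z, sigma=1.0):
    """Gaussian (RBF) kernel K(x, z)."""
    return np.exp(-np.sum((x - z) ** 2) / (2.0 * sigma ** 2))

def online_kernel_gd(stream, eta=0.5, sigma=1.0):
    """Unregularized least-squares online gradient descent in an RKHS.

    The hypothesis is stored as a kernel expansion f_t(.) = sum_i alpha_i K(x_i, .);
    each example adds one centre x_t with coefficient -eta_t * (f_t(x_t) - y_t).
    """
    centres, alphas = [], []
    for t, (x_t, y_t) in enumerate(stream, start=1):
        # Evaluate the current hypothesis at the incoming point.
        f_xt = sum(a * gaussian_kernel(c, x_t, sigma) for c, a in zip(centres, alphas))
        eta_t = eta / np.sqrt(t)  # decaying step size (illustrative choice)
        centres.append(x_t)
        alphas.append(-eta_t * (f_xt - y_t))
    return centres, alphas

# Toy usage on a noisy sine curve.
rng = np.random.default_rng(0)
xs = rng.uniform(-1.0, 1.0, size=(200, 1))
ys = np.sin(3.0 * xs[:, 0]) + 0.1 * rng.standard_normal(200)
centres, alphas = online_kernel_gd(zip(xs, ys))
```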
Online Learning, Stability, and Stochastic Gradient Descent
In batch learning, stability together with existence and uniqueness of the solution corresponds to well-posedness of Empirical Risk Minimization (ERM) methods; recently, it was proved that CVloo stability is necessary and sufficient for generalization and consistency of ERM ([9]). In this note, we introduce CVon stability, which plays a similar role in online learning. We show that stochastic g...
Designing stable neural identifier based on Lyapunov method
The stability of learning rate in neural network identifiers and controllers is one of the challenging issues which attracts great interest from researchers of neural networks. This paper suggests adaptive gradient descent algorithm with stable learning laws for modified dynamic neural network (MDNN) and studies the stability of this algorithm. Also, stable learning algorithm for parameters of ...
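As an illustration of the kind of Lyapunov argument used to derive stable learning laws, the following Python sketch trains a linear-in-parameter model with a per-step learning rate kept inside a Lyapunov stability bound; the MDNN of the paper has a richer structure, so the model form and the parameters `gamma` and `lam` are simplifying assumptions.

```python
import numpy as np

def lyapunov_stable_gd(phi, y, gamma=1.0, lam=1e-8):
    """Gradient-descent training of y_hat = w @ phi with a step size kept
    inside a Lyapunov stability bound.

    With V(k) = 0.5 * e(k)**2 and e(k) = y[k] - w @ phi[k], one gradient step
    gives e(k+1) = (1 - eta * ||phi[k]||^2) * e(k) on the same sample, so
    Delta V < 0 whenever 0 < eta * ||phi[k]||^2 < 2.  Choosing
    eta_k = gamma / (lam + ||phi[k]||^2) with 0 < gamma < 2 satisfies this.
    """
    n_samples, n_features = phi.shape
    w = np.zeros(n_features)
    for k in range(n_samples):
        e_k = y[k] - w @ phi[k]
        eta_k = gamma / (lam + phi[k] @ phi[k])  # Lyapunov-derived stable step
        w = w + eta_k * e_k * phi[k]             # gradient step on 0.5 * e_k**2
    return w
```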
The general inefficiency of batch training for gradient descent learning
Gradient descent training of neural networks can be done in either a batch or on-line manner. A widely held myth in the neural network community is that batch training is as fast or faster and/or more 'correct' than on-line training because it supposedly uses a better approximation of the true gradient for its weight updates. This paper explains why batch training is almost always slower than o...
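For contrast, here is a minimal Python sketch of the two update schedules the abstract compares, batch versus on-line gradient descent on a least-squares problem; the learning rate and epoch count are arbitrary placeholders, not values from the paper.

```python
import numpy as np

def batch_gd(X, y, eta=0.1, epochs=100):
    """Batch gradient descent: one weight update per pass over the full set."""
    w = np.zeros(X.shape[1])
    for _ in range(epochs):
        grad = X.T @ (X @ w - y) / len(y)  # average gradient of 0.5*||Xw - y||^2
        w -= eta * grad
    return w

def online_gd(X, y, eta=0.1, epochs=100):
    """On-line gradient descent: one weight update after every single example."""
    w = np.zeros(X.shape[1])
    for _ in range(epochs):
        for x_i, y_i in zip(X, y):
            w -= eta * (x_i @ w - y_i) * x_i
    return w
```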
Journal
Journal title: ISRN Applied Mathematics
Year: 2011
ISSN: 2090-5564,2090-5572
DOI: 10.5402/2011/145801